Variance-Reduced Stochastic Quasi-Newton Methods for Decentralized Learning

Authors

Abstract

In this work, we investigate stochastic quasi-Newton methods for minimizing a finite sum of cost functions over a decentralized network. We first develop a general algorithmic framework in which each node constructs a local, inexact direction that asymptotically approaches the global, exact one at each time step. To do so, a local gradient approximation is constructed using dynamic average consensus to track the variance-reduced gradients of the entire network, followed by a proper Hessian inverse approximation. We show that, under standard convexity and smoothness assumptions on the cost functions, the methods obtained from our framework converge linearly to the optimal solution if the Hessian inverse approximations used have uniformly bounded positive eigenvalues. To construct approximations with said boundedness property, we design two fully decentralized methods, namely the damped regularized limited-memory DFP (Davidon-Fletcher-Powell) and BFGS (Broyden-Fletcher-Goldfarb-Shanno) methods, which use a fixed moving window of past decision variables and gradient approximations to adaptively construct the approximations. A noteworthy feature of these methods is that they do not require extra sampling or communication. Numerical results show that the proposed methods are much faster than existing first-order algorithms.
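
To make the abstract's ingredients concrete, the following is a minimal, illustrative Python sketch (not the paper's exact algorithm) of one way these pieces can fit together per node: an SVRG-style variance-reduced local gradient, dynamic-average-consensus gradient tracking, and a limited-memory BFGS direction. The quadratic data, ring topology, mixing weights, step size, and the simple curvature test standing in for the paper's damping/regularization are all assumptions made only for this example.

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, d, m = 4, 10, 50                      # nodes, dimension, samples per node
A = [rng.standard_normal((m, d)) for _ in range(n_nodes)]
x_star = rng.standard_normal(d)
b = [Ai @ x_star + 0.01 * rng.standard_normal(m) for Ai in A]

# Symmetric, doubly stochastic mixing matrix for a 4-node ring (assumed topology).
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

def grad(i, x, idx=None):
    """(Mini-batch) gradient of node i's local least-squares cost."""
    Ai, bi = (A[i], b[i]) if idx is None else (A[i][idx], b[i][idx])
    return Ai.T @ (Ai @ x - bi) / len(bi)

def lbfgs_apply(g, S, Y):
    """Two-loop recursion: apply a limited-memory BFGS inverse Hessian estimate to g."""
    q, alphas = g.copy(), []
    for s, y in zip(reversed(S), reversed(Y)):
        a = (s @ q) / (s @ y)
        alphas.append(a)
        q = q - a * y
    gamma = (S[-1] @ Y[-1]) / (Y[-1] @ Y[-1]) if S else 1.0
    r = gamma * q
    for (s, y), a in zip(zip(S, Y), reversed(alphas)):
        r = r + (a - (y @ r) / (s @ y)) * s
    return r

step, batch, mem, epochs, inner = 0.2, 10, 5, 30, 10
x = [np.zeros(d) for _ in range(n_nodes)]
v_prev = [grad(i, x[i]) for i in range(n_nodes)]           # initial local gradients
g_track = [v.copy() for v in v_prev]                       # consensus gradient trackers
S_hist = [[] for _ in range(n_nodes)]
Y_hist = [[] for _ in range(n_nodes)]

for ep in range(epochs):
    x_snap = [xi.copy() for xi in x]                        # SVRG snapshot per node
    full_g = [grad(i, x_snap[i]) for i in range(n_nodes)]
    for _ in range(inner):
        x_new, v_new, g_new = [], [], []
        for i in range(n_nodes):
            idx = rng.choice(m, batch, replace=False)
            # SVRG-style variance-reduced local stochastic gradient.
            v = grad(i, x[i], idx) - grad(i, x_snap[i], idx) + full_g[i]
            # Dynamic average consensus: mix neighbors' trackers, add local innovation.
            g = sum(W[i, j] * g_track[j] for j in range(n_nodes)) + v - v_prev[i]
            d_i = -lbfgs_apply(g, S_hist[i], Y_hist[i])     # local quasi-Newton direction
            # Mix neighbors' iterates, then take a local step along d_i.
            xi = sum(W[i, j] * x[j] for j in range(n_nodes)) + step * d_i
            s, y = xi - x[i], g - g_track[i]
            if s @ y > 1e-10 * (s @ s):                     # crude positive-curvature check
                S_hist[i] = (S_hist[i] + [s])[-mem:]
                Y_hist[i] = (Y_hist[i] + [y])[-mem:]
            x_new.append(xi); v_new.append(v); g_new.append(g)
        x, v_prev, g_track = x_new, v_new, g_new
    err = np.mean([np.linalg.norm(xi - x_star) for xi in x])
    print(f"epoch {ep:2d}  mean distance to x*: {err:.3e}")
```

Note that each node here only exchanges its iterate and gradient tracker with neighbors through the mixing matrix; the limited-memory curvature pairs are built from locally available quantities, mirroring the abstract's point that no extra sampling or communication rounds are needed.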

Similar Articles

A Variance Reduced Stochastic Newton Method

Quasi-Newton methods are widely used in practice for convex loss minimization problems. These methods exhibit good empirical performance on a wide variety of tasks and enjoy super-linear convergence to the optimal solution. For large-scale learning problems, stochastic quasi-Newton methods have recently been proposed. However, these typically only achieve sub-linear convergence rates and have no...

Stochastic Variance-Reduced Cubic Regularized Newton Method

We propose a stochastic variance-reduced cubic regularized Newton method for non-convex optimization. At the core of our algorithm is a novel semi-stochastic gradient along with a semi-stochastic Hessian, which are specifically designed for the cubic regularization method. We show that our algorithm is guaranteed to converge to an (ε, √ε)-approximate local minimum within Õ(n^(4/5)/ε^(3/2)) second-order oracl...
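
The core idea described here is a control-variate (SVRG-style) construction of "semi-stochastic" gradient and Hessian estimates around a snapshot point, which are then fed into a cubic-regularized subproblem. Below is a minimal Python sketch of such estimators for a toy least-squares sum; the data, batch sizes, and helper names (grad_i, hess_i, semi_stochastic_estimates) are assumptions for illustration, and the cited paper's exact estimators may differ.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 200, 5
A = rng.standard_normal((n, d))
y = rng.standard_normal(n)

def grad_i(i, x):      # per-sample gradient of 0.5*(a_i^T x - y_i)^2
    return (A[i] @ x - y[i]) * A[i]

def hess_i(i, x):      # per-sample Hessian (constant here, but kept general)
    return np.outer(A[i], A[i])

def semi_stochastic_estimates(x, x_snap, full_grad, full_hess, batch_g, batch_h):
    """SVRG-style control-variate estimates of the gradient and Hessian at x."""
    Ig = rng.choice(n, batch_g, replace=False)
    Ih = rng.choice(n, batch_h, replace=False)
    g = full_grad + np.mean([grad_i(i, x) - grad_i(i, x_snap) for i in Ig], axis=0)
    H = full_hess + np.mean([hess_i(i, x) - hess_i(i, x_snap) for i in Ih], axis=0)
    # These would be plugged into the cubic subproblem
    #   min_h  g @ h + 0.5 * h @ H @ h + (M / 6) * ||h||^3
    return g, H

x_snap = np.zeros(d)
full_grad = np.mean([grad_i(i, x_snap) for i in range(n)], axis=0)
full_hess = np.mean([hess_i(i, x_snap) for i in range(n)], axis=0)
g_est, H_est = semi_stochastic_estimates(rng.standard_normal(d), x_snap,
                                         full_grad, full_hess, batch_g=20, batch_h=20)
print(g_est.shape, H_est.shape)
```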

Stochastic Quasi-Newton Methods for Nonconvex Stochastic Optimization

In this paper we study stochastic quasi-Newton methods for nonconvex stochastic optimization, where we assume that noisy information about the gradients of the objective function is available via a stochastic first-order oracle (SFO). We propose a general framework for such methods, for which we prove almost sure convergence to stationary points and analyze its worst-case iteration complexity. ...

Quasi-Newton Methods for Nonconvex Constrained Multiobjective Optimization

Here, a quasi-Newton algorithm for constrained multiobjective optimization is proposed. Under suitable assumptions, global convergence of the algorithm is established.

Stochastic Variance-Reduced Hamilton Monte Carlo Methods

We propose a fast stochastic Hamilton Monte Carlo (HMC) method for sampling from a smooth and strongly log-concave distribution. At the core of our proposed method is a variance reduction technique inspired by recent advances in stochastic optimization. We show that, to achieve ε accuracy in 2-Wasserstein distance, our algorithm achieves Õ(n + κ^2 d^(1/2)/ε + κ^(4/3) d^(1/3) n^(2/3)/ε^(2/3)) gradient complexity (i.e., numb...
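
As a rough illustration of the idea, the sketch below runs HMC leapfrog steps driven by an SVRG-style variance-reduced estimate of the gradient of the potential U, instead of full gradients. This is not the cited paper's exact sampler; the Gaussian-like target, step size eta, batch size, and trajectory lengths are assumptions chosen only for this toy example.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 500, 2
A = rng.standard_normal((n, d))
theta_true = rng.standard_normal(d)
b = A @ theta_true + rng.standard_normal(n)

def grad_U_i(i, theta):
    """Per-sample gradient of U(theta) = 0.5*||A theta - b||^2 + 0.5*||theta||^2."""
    return (A[i] @ theta - b[i]) * A[i] + theta / n

def full_grad_U(theta):
    return A.T @ (A @ theta - b) + theta

def vr_grad(theta, snap, g_snap, batch):
    """SVRG-style control-variate estimate of the full gradient of U at theta."""
    idx = rng.choice(n, batch, replace=False)
    corr = np.mean([grad_U_i(i, theta) - grad_U_i(i, snap) for i in idx], axis=0)
    return g_snap + n * corr

eta, L_steps, batch, outer, inner = 1e-3, 5, 10, 20, 50
theta, samples = np.zeros(d), []
for _ in range(outer):
    snap = theta.copy()
    g_snap = full_grad_U(snap)                      # full gradient at the snapshot
    for _ in range(inner):
        r = rng.standard_normal(d)                  # resample momentum
        for _ in range(L_steps):                    # leapfrog with VR gradients
            r = r - 0.5 * eta * vr_grad(theta, snap, g_snap, batch)
            theta = theta + eta * r
            r = r - 0.5 * eta * vr_grad(theta, snap, g_snap, batch)
        samples.append(theta.copy())

print("true theta:           ", theta_true)
print("posterior-mean estimate:", np.mean(samples[len(samples) // 2:], axis=0))
```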

Journal

Journal title: IEEE Transactions on Signal Processing

Year: 2023

ISSN: 1053-587X, 1941-0476

DOI: https://doi.org/10.1109/tsp.2023.3240652